Chromaticity limits in color constancy calculations
Abstract
There are two very different kinds of color constancy. One kind studies the ability of humans to be insensitive to the spectral composition of scene illumination. The second studies computer vision techniques for calculating the surface reflectances of objects in variable illumination. Camera-measured chromaticity has been used as a tool in computer vision scene analysis. This paper measures the ColorChecker test target in uniform illumination to verify the accuracy of scene capture. We identify the limitations of sRGB camera standards, the dynamic-range limits of RAW scene captures, and the presence of camera veiling glare in areas darker than middle gray. Measurements of scene radiances and chromaticities with spot meters are much more accurate than camera capture because of scene-dependent veiling glare. Camera capture data must be verified by calibration.

Introduction

Studies of color constancy in complex scenes were originally done with simple images that allowed radiometric measurements of all image segments in the field of view (Land, 1964; Land & McCann, 1971). McCann, McKee and Taylor (1976) introduced the field of Computational Color Constancy, using digital input arrays of 20 by 24 pixels. This size seems ridiculously small now, but it was very large for the time. They showed that:

• 1. Observer color constancy matches correlated with Scaled Integrated Reflectances of Mondrian areas calculated using an algorithm that made spatial comparisons (a minimal sketch of such spatial comparisons appears at the end of this Introduction).
• 2. The subtle departures from perfect constancy were modeled well by cone crosstalk in the spatial comparisons.

As digital imaging advanced, it became possible to automatically capture arrays of millions of digital values from complex scenes. At the same time, Computational Color Constancy has split into two distinct domains:

• Human Color Constancy (HCC) studies the ability of humans to be insensitive to the spectral composition of scene illumination. Its goal is to calculate the appearance of scene segments given only accurate radiances from each segment. No additional information is required, such as the radiance of the illumination used in CIECAM models. The ground truth of HCC is the psychophysical measurement of the appearance of each image segment.
• Computer Vision Color Constancy (CVCC) studies techniques for estimating the surface reflectance of objects in variable illumination. Its goal is to separate the reflectance and illumination components from the input array of scene radiances. If successful, these algorithms use the information from the entire scene to find an object's surface reflectance. The ground truth of CVCC is the physical measurement of the surface reflectance of each image segment.

Experiments measuring the human appearance of constant surface reflectances show considerable variation depending on scene content. Innumerable examples include simultaneous contrast, color assimilation, and 3-D Mondrians (Albers, 1962; Parraman et al., 2009, 2010). Computer vision's goal is to identify the surface, regardless of its appearance to humans. Thus, the two distinct kinds of Color Constancy do not share the same ground truth. They either have different fundamental mechanisms, or they have very different implementations. If they use the same underlying mechanism, then that mechanism would have to compute very different results: a single reflectance surface is seen in HCC to vary considerably with scene content, while the challenge to CVCC is to estimate the same constant reflectance in all scene contents.
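The spatial comparisons mentioned above can be illustrated with a minimal, single-path, single-waveband sketch of the ratio-threshold-product-reset operation described by Land & McCann (1971). This is an assumption-laden illustration, not the 1976 implementation, which averaged many paths per waveband and applied a scaling function to the integrated reflectances; the sample radiances and threshold below are invented for the example.

```python
import numpy as np

def path_estimate(radiances, threshold=0.003):
    # Minimal single-path, single-waveband sketch of Retinex-style spatial
    # comparisons (ratio, threshold, product, reset). 'radiances' are positive
    # scene radiances sampled along one path; values and threshold are assumed.
    logs = np.log(np.asarray(radiances, dtype=float))
    estimate = np.zeros(len(logs))             # log 1 = 0: assume the start is "white"
    for i in range(1, len(logs)):
        ratio = logs[i] - logs[i - 1]          # edge ratio between neighbours, in log units
        if abs(ratio) < threshold:             # discard gradual (illumination) changes
            ratio = 0.0
        estimate[i] = estimate[i - 1] + ratio  # sequential product (sum of log ratios)
        if estimate[i] > 0.0:                  # reset: no area exceeds the path maximum
            estimate[i] = 0.0
    return np.exp(estimate)                    # estimates relative to the path maximum

# A sharp reflectance edge survives; a shallow illumination gradient is removed.
print(path_estimate([100.0, 99.9, 99.8, 50.0, 49.9]))   # ≈ [1. 1. 1. 0.5 0.5]
```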
Image capture

A common problem in both HCC and CVCC is the need for accurate scene-radiance data as input to the models. The early spot-meter technique used to measure simple targets was replaced by digital scans of high-dynamic-range film images (McCann, 1988), and more recently by multiple exposures using electronic imaging. Papers by Debevec and Malik (1997), Mitsunaga and Nayar (1999), Robertson et al. (2003), and Grossberg and Nayar (2004) propose calibration methods for standard digital images. Funt & Shi (2010) describe the advantages of using DCRAW software to extract RAW camera data that is linear and closer to the camera sensor's response. Xiong et al. (2012) and Kim et al. (2012) describe techniques for converting standard images to RAW for further processing. The common thread is that these papers attempt to remove the camera's response functions from its digital data in order to measure accurate scene radiances.

Surface reflectance by first finding illumination

Helmholtz (1924) introduced the idea that constancy could be explained by finding the illumination first. If that were accomplished by some means, the quanta catch of the receptors divided by the quanta catch from the illumination would equal a measure of surface reflectance. For Human Color Constancy (HCC) that approach could provide an alternative partial explanation of McCann et al. (1976), but not of subsequent vision measurements (McCann, 2012, chapter 27.5). For CVCC, that approach works within strict bounds imposed on the illumination. Obviously, it can work perfectly in illumination that is both spatially and spectrally uniform. Under these conditions there is a single description of the illumination falling on all objects in the scene. Real scenes, however, do not have uniform illumination. One CVCC approach assumes that the illumination is spectrally uniform, namely that a single illuminant spectrum falls on all objects but varies in intensity. Under such a spectrally uniform illuminant we can use chromaticity, a measure of spectral composition, to describe any intensity of that spectrum. However, if the scene contains more than one spectral illuminant, such as sunlight and skylight, or colored reflections from a colored surface, then a single chromaticity value does not describe the illuminant on all areas.

Chromaticity as a Constancy Tool

Chromaticity is a tool used frequently in Computer Vision Color Constancy (Funt et al., 1998; Finlayson et al., 2001; Ebner, 2007; Yao, 2008; Funt & Shi, 2010; Yang et al., 2011; Gevers et al., 2012; Jiang et al., 2012; Ratnasingam et al., 2012). Chromaticity is the projection of the three-dimensional color solid onto a plane defined by the RGB components. Position in that plane, defined by r and g, is calculated by normalizing each channel by the sum of the three: r = R / (R + G + B) and g = G / (R + G + B).
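Because this rg projection normalizes out overall intensity, a uniform scaling of a single-spectrum illuminant leaves chromaticity unchanged, while an additive offset, such as camera veiling glare, does not. The sketch below uses hypothetical linear RAW-like sensor values (not the paper's measurements) to show both effects.

```python
import numpy as np

def rg_chromaticity(rgb):
    # Project linear RGB triplets onto the r,g plane:
    # r = R/(R+G+B), g = G/(R+G+B).
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    return (rgb / total)[..., :2]

# Hypothetical dark patch under a single-spectrum illuminant (assumed values).
patch = np.array([0.060, 0.040, 0.030])

print(rg_chromaticity(patch))          # ≈ [0.462, 0.308]
print(rg_chromaticity(0.25 * patch))   # identical: intensity scaling cancels out
print(rg_chromaticity(patch + 0.02))   # ≈ [0.421, 0.316]: a neutral additive glare
                                       # offset drags r,g toward (1/3, 1/3)
```

The last line illustrates why the darker patches of a test target are the most vulnerable: the smaller the patch radiance, the larger the chromaticity shift produced by the same additive glare.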
متن کاملذخیره در منابع من
با ذخیره ی این منبع در منابع من، دسترسی به آن را برای استفاده های بعدی آسان تر کنید
عنوان ژورنال:
دوره شماره
صفحات -
Publication date: 2013